In this tutorial, we show an example of a prototypical task that DeepDive is often applied to: extraction of structured information from unstructured or 'dark' data such as web pages, text documents, images, etc. While DeepDive can be used as a more general platform for statistical learning and data processing, most of the tooling described herein has been built for this type of use case, based on our experience of successfully applying DeepDive to a variety of real-world problems of this type.
In this setting, our goal is to take in a set of unstructured (and/or structured) inputs, and populate a relational database table with extracted outputs, along with marginal probabilities for each extraction representing DeepDive's confidence in the extraction. More formally, we write a DeepDive application to extract mentions of relations and their constituent entities or attributes, according to a specified schema; this task is often referred to as relation extraction.* Accordingly, we'll walk through an example scenario where we wish to extract mentions of two people being spouses from news articles.
The high-level steps we'll follow are:
Data processing. First, we'll load the raw corpus, add NLP markups, extract a set of candidate relation mentions, and a sparse feature representation of each.
Distant supervision with data and rules. Next, we'll use various strategies to provide supervision for our dataset, so that we can use machine learning to learn the weights of a model.
Learning and inference: model specification. Then, we'll specify the high-level configuration of our model.
Error analysis and debugging. Finally, we'll show how to use DeepDive's labeling, error analysis and debugging tools.
*Note the distinction between extraction of true, i.e., factual, relations and extraction of mentions of relations. In this tutorial, we do the latter; however, DeepDive supports further downstream methods for tackling the former task in a principled manner.
Whenever something isn't clear, you can always refer to the complete example code at examples/spouse/, which contains everything shown in this document.
First of all, let's make sure DeepDive is installed and can be used from this notebook. See DeepDive installation guide for more details.
In [1]:
# PATH needs correct setup to use DeepDive
import os; PWD=os.getcwd(); HOME=os.environ["HOME"]; PATH=os.environ["PATH"]
# home directory installation
%env PATH=$HOME/local/bin:$PATH
# notebook-local installation
%env PATH=$PWD/deepdive/bin:$PATH
!type deepdive
no_deepdive_found = !type deepdive >/dev/null
if no_deepdive_found: # install it next to this notebook
    !bash -c 'PREFIX="$PWD"/deepdive bash <(curl -fsSL git.io/getdeepdive) deepdive_from_release'
We need to make sure this IPython/Jupyter notebook will work correctly with DeepDive:
In [2]:
# check if notebook kernel was launched in a Unicode locale
import locale; LC_CTYPE = locale.getpreferredencoding()
if LC_CTYPE != "UTF-8":
    raise EnvironmentError("Notebook is running in '%s' encoding not compatible with DeepDive's Unicode output.\n\nPlease restart notebook in a UTF-8 locale with a command like the following:\n\n LC_ALL=en_US.UTF-8 jupyter notebook" % (LC_CTYPE))
In [3]:
%%file app.ddlog
## Random variable to predict #################################################
# This application's goal is to predict whether a given pair of person
# mentions indicates a spouse relationship or not.
has_spouse?(
    p1_id text,
    p2_id text
).
In this notebook, we are going to write our application in this app.ddlog
file, one part at a time.
We can check that the code makes sense by asking DeepDive to compile it.
DeepDive automatically compiles our application whenever we execute things after making changes, but we can also do this manually by running:
In [4]:
!deepdive compile
Next, DeepDive will store all data—input, intermediate, output, etc.—in a relational database.
Currently, Postgres and Greenplum are supported.
For operating at a larger scale, Greenplum is strongly recommended.
To set the location of this database, we need to configure a URL in the db.url
file, e.g.:
In [5]:
!echo 'postgresql://'"${PGHOST:-localhost}"'/deepdive_spouse_$USER' >db.url
If you have no running database yet, the following commands can quickly bring up a new PostgreSQL server to be used with DeepDive, storing all data at run/database/postgresql
next to this notebook.
In [6]:
no_database_running = !deepdive db is_ready || echo $?
if no_database_running:
    PGDATA = "run/database/postgresql"
    !mkdir -p $PGDATA; test -s $PGDATA/PG_VERSION || pg_ctl init -D $PGDATA >/dev/null
    !nohup pg_ctl -D $PGDATA -l $PGDATA/logfile start >/dev/null
Note: DeepDive will drop and then create this database if run from scratch—beware of pointing to an existing populated one!
In [7]:
!deepdive redo init/app
In this section, we'll generate the traditional inputs of a statistical learning-type problem: candidate spouse relations, represented by a set of features, which we will aim to classify as actual relation mentions or not.
We'll do this in four basic steps:
1. loading the raw input data,
2. adding NLP markups,
3. extracting candidate relation mentions, and
4. extracting features for each candidate.
Our first task is to download and load the raw text of a corpus of news articles provided by Signal Media into an articles
table in our database.
Storing just the identifier of each article and its content in the table is good enough.
We can tell DeepDive to do this by declaring the schema of this articles
table in our app.ddlog
file; we add the following lines:
In [8]:
%%file -a app.ddlog
## Input Data #################################################################
articles(
    id text,
    content text
).
DeepDive can use a script's output as a data source for loading data into the table if we follow a simple naming convention.
We create a simple shell script at input/articles.tsj.sh
that outputs the news articles in TSJ format (tab-separated JSONs) from the downloaded corpus.
In [9]:
!mkdir -p input
In [10]:
%%file input/articles.tsj.sh
#!/usr/bin/env bash
set -euo pipefail
cd "$(dirname "$0")"
corpus=signalmedia/signalmedia-1m.jsonl
[[ -e "$corpus" ]] || {
    echo "ERROR: Missing $PWD/$corpus"
    echo "# Please download it from http://research.signalmedia.co/newsir16/signal-dataset.html"
    echo
    echo "# Alternatively, use our sampled data by running:"
    echo "deepdive load articles input/articles-100.tsv.bz2"
    echo
    echo "# Or, skip all NLP markup processes by running:"
    echo "deepdive create table sentences"
    echo "deepdive load sentences"
    echo "deepdive mark done sentences"
    false
} >&2
cat "$corpus" |
#grep -E 'wife|husband|married' |
#head -100 |
jq -r '[.id, .content] | map(@json) | join("\t")'
We need to mark the script as executable so DeepDive can actually execute it:
In [11]:
!chmod +x input/articles.tsj.sh
The script above reads the corpus (provided as lines of JSON objects), uses the jq language to extract the fields id
(the article identifier) and content
from each entry, and formats those into TSJ.
We can uncomment the grep
or head
lines in between to apply a naive filter or to subsample articles.
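For illustration, here is a minimal sketch of how one TSJ line is put together: each column is a single JSON value, and the columns are joined by tab characters (the identifier and content below are made up):

import json
# Build one TSJ line: JSON-encoding each column escapes tabs, newlines, and quotes,
# so they cannot break the row framing. (Hypothetical values, for illustration only.)
doc_id, content = "doc-42", 'A "quoted" headline\nand more text'
line = "\t".join(json.dumps(v) for v in [doc_id, content])
print(line)  # "doc-42"<TAB>"A \"quoted\" headline\nand more text"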
Now, we tell DeepDive to execute the steps to load the articles
table using the input/articles.tsj.sh
script.
You must have the full corpus downloaded at input/signalmedia/signalmedia-1m.jsonl
for the following to finish correctly.
In [12]:
!deepdive redo articles
Alternatively, samples of 100 or 1000 articles can be downloaded from GitHub and loaded into DeepDive with the following commands:
In [13]:
NUM_ARTICLES = 100
ARTICLES_FILE = "articles-%d.tsj.bz2" % NUM_ARTICLES
articles_not_done = !deepdive done articles || date
if articles_not_done:
    !cd input && curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/input/"$ARTICLES_FILE
    !deepdive reload articles input/$ARTICLES_FILE
After DeepDive finishes creating the table and then fetching and loading the data, we can take a look at the loaded data using the following deepdive query
command, which enumerates the values for the id
column of the articles
table:
In [14]:
!deepdive query '|10 ?- articles(id, _).'
Next, we'll use Stanford's CoreNLP natural language processing (NLP) system to add useful markups and structure to our input data. This step will split up our articles into sentences and their component tokens (roughly, the words). Additionally, we'll get lemmas (normalized word forms), part-of-speech (POS) tags, named entity recognition (NER) tags, and a dependency parse of the sentence.
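To build some intuition for these markups before declaring the schema, here is what the per-sentence parallel arrays look like for a hypothetical sentence (illustrative values, not actual CoreNLP output):

# Hypothetical NLP markup for one sentence, stored as parallel arrays
# (one entry per token; illustrative only, not real corpus output).
tokens   = ["Barack", "Obama", "married", "Michelle", "Robinson", "in", "1992", "."]
lemmas   = ["Barack", "Obama", "marry", "Michelle", "Robinson", "in", "1992", "."]
pos_tags = ["NNP", "NNP", "VBD", "NNP", "NNP", "IN", "CD", "."]
ner_tags = ["PERSON", "PERSON", "O", "PERSON", "PERSON", "O", "DATE", "O"]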
Let's first declare the output schema of this step in app.ddlog
:
In [15]:
%%file -a app.ddlog
## NLP markup #################################################################
sentences(
    doc_id         text,
    sentence_index int,
    tokens         json,
    lemmas         json,
    pos_tags       json,
    ner_tags       json,
    doc_offsets    json,
    dep_types      json,
    dep_tokens     json
).
Next, we declare a DDlog function which takes in the doc_id
and content
for an article and returns rows conforming to the sentences schema we just declared, using the user-defined function (UDF) in udf/nlp_markup.sh
.
We specify that this nlp_markup
function should be run over each row from articles
, and the output appended to sentences
:
In [16]:
%%file -a app.ddlog
function nlp_markup over (
        doc_id  text,
        content text
    ) returns rows like sentences
    implementation "udf/nlp_markup.sh" handles tsj lines.

sentences += nlp_markup(doc_id, content) :-
    articles(doc_id, content).
This UDF udf/nlp_markup.sh
is a Bash script which uses our own wrapper around CoreNLP.
In [17]:
!mkdir -p udf
In [18]:
%%file udf/nlp_markup.sh
#!/usr/bin/env bash
# Parse documents in tab-separated JSONs input stream with CoreNLP
#
# $ deepdive corenlp install
# $ deepdive corenlp start
# $ deepdive env udf/nlp_markup.sh
# $ deepdive corenlp stop
##
set -euo pipefail
cd "$(dirname "$0")"
# some configuration knobs for CoreNLP
: ${CORENLP_PORT:=$(deepdive corenlp unique-port)} # a CoreNLP server started ahead of time is shared across parallel UDF processes
# See: http://stanfordnlp.github.io/CoreNLP/annotators.html
: ${CORENLP_ANNOTATORS:="
        tokenize
        ssplit
        pos
        ner
        lemma
        depparse
    "}
export CORENLP_PORT
export CORENLP_ANNOTATORS
# make sure CoreNLP server is available
deepdive corenlp is-running || {
    echo >&2 "PLEASE MAKE SURE YOU HAVE RUN: deepdive corenlp start"
    false
}
# parse input with CoreNLP and output a row for every sentence
deepdive corenlp parse-tsj docid+ content=nlp -- docid nlp |
deepdive corenlp sentences-tsj docid content:nlp \
    -- docid nlp.{index,tokens.{word,lemma,pos,ner,characterOffsetBegin}} \
       nlp.collapsed-dependencies.{dep_type,dep_token}
Again, we mark it as executable for DeepDive to run it:
In [19]:
!chmod +x udf/nlp_markup.sh
Before executing this NLP markup step, we need to launch the CoreNLP server in advance, which may take a while to install and load everything. Note that the CoreNLP library requires Java 8 to run.
In [20]:
!deepdive corenlp install
# If CoreNLP seems to take forever to start, retry after uncommenting the following line:
#%env CORENLP_JAVAOPTS=-Xmx4g
!deepdive corenlp start
In [21]:
!deepdive redo sentences
Now, if we take a look at a sample of the NLP markups, they will have tokens and NER tags that look like the following:
In [22]:
%%bash
deepdive query '
doc_id, index, tokens, ner_tags | 5
?- sentences(doc_id, index, tokens, lemmas, pos_tags, ner_tags, _, _, _).
'
In [23]:
%%file -a app.ddlog
## Candidate mapping ##########################################################
person_mention(
    mention_id     text,
    mention_text   text,
    doc_id         text,
    sentence_index int,
    begin_index    int,
    end_index      int
).
We will store each person mention as a row referencing a sentence, along with beginning and ending token indexes. Again, we next declare a function that references a UDF and takes as input the sentence tokens and NER tags:
In [24]:
%%file -a app.ddlog
function map_person_mention over (
        doc_id         text,
        sentence_index int,
        tokens         text[],
        ner_tags       text[]
    ) returns rows like person_mention
    implementation "udf/map_person_mention.py" handles tsj lines.
We'll write a simple UDF in Python that will tag spans of contiguous tokens with the NER tag PERSON
as person mentions (i.e., we'll essentially rely on CoreNLP's NER module).
Note that we've already used a Bash script as a UDF, and indeed any programming language can be used.
(DeepDive will just check the path specified in the top line, e.g., #!/usr/bin/env python
.)
However, DeepDive provides some convenient utilities for Python UDFs which handle all IO encoding/decoding.
To write our UDF udf/map_person_mention.py
, we'll start by specifying that our UDF will handle TSJ lines (as specified in the DDlog above).
Additionally, we'll specify the exact type schema of both input and output, which DeepDive will check for us:
In [25]:
%%file udf/map_person_mention.py
#!/usr/bin/env python
from deepdive import *
@tsj_extractor
@returns(lambda
        mention_id     = "text",
        mention_text   = "text",
        doc_id         = "text",
        sentence_index = "int",
        begin_index    = "int",
        end_index      = "int",
    :[])
def extract(
        doc_id         = "text",
        sentence_index = "int",
        tokens         = "text[]",
        ner_tags       = "text[]",
    ):
    """
    Finds phrases that are continuous words tagged with PERSON.
    """
    num_tokens = len(ner_tags)
    # find all first indexes of series of tokens tagged as PERSON
    first_indexes = (i for i in range(num_tokens) if ner_tags[i] == "PERSON" and (i == 0 or ner_tags[i-1] != "PERSON"))
    for begin_index in first_indexes:
        # find the end of the PERSON phrase (consecutive tokens tagged as PERSON)
        end_index = begin_index + 1
        while end_index < num_tokens and ner_tags[end_index] == "PERSON":
            end_index += 1
        end_index -= 1
        # generate a mention identifier
        mention_id = "%s_%d_%d_%d" % (doc_id, sentence_index, begin_index, end_index)
        mention_text = " ".join(tokens[begin_index:end_index + 1])
        # Output a tuple for each PERSON phrase
        yield [
            mention_id,
            mention_text,
            doc_id,
            sentence_index,
            begin_index,
            end_index,
        ]
In [26]:
!chmod +x udf/map_person_mention.py
Above, we write a simple function which extracts and tags all subsequences of tokens having the NER tag "PERSON".
Note that the extract
function must be a generator (i.e., use a yield
statement to return output rows).
Finally, we specify that the function will be applied to rows from the sentences
table and append to the person_mention
table:
In [27]:
%%file -a app.ddlog
person_mention += map_person_mention(
    doc_id, sentence_index, tokens, ner_tags
) :-
    sentences(doc_id, sentence_index, tokens, _, _, ner_tags, _, _, _).
Again, to run, just compile and execute as in previous steps:
In [28]:
!deepdive redo person_mention
In [29]:
%%bash
deepdive query '
name, doc, sentence, begin, end | 20
?- person_mention(p_id, name, doc, sentence, begin, end).
'
Next, we'll take all pairs of non-overlapping person mentions that co-occur in a sentence with fewer than 5 people total, and consider these as the set of potential ('candidate') spouse mentions.
We thus filter out sentences with large numbers of people for the purposes of this tutorial; however, these could be included if desired.
Again, to start, we declare the schema for our spouse_candidate
table—here just the two names, and the two person_mention
IDs referred to:
In [30]:
%%file -a app.ddlog
spouse_candidate(
    p1_id   text,
    p1_name text,
    p2_id   text,
    p2_name text
).
Next, for this operation we don't use any UDF script; instead, we rely entirely on DDlog operations. We simply construct a table of person counts, and then do a join with our filtering conditions. In DDlog, this looks like:
In [31]:
%%file -a app.ddlog
num_people(doc_id, sentence_index, COUNT(p)) :-
    person_mention(p, _, doc_id, sentence_index, _, _).
spouse_candidate(p1, p1_name, p2, p2_name) :-
    num_people(same_doc, same_sentence, num_p),
    person_mention(p1, p1_name, same_doc, same_sentence, p1_begin, _),
    person_mention(p2, p2_name, same_doc, same_sentence, p2_begin, _),
    num_p < 5,
    p1 < p2,
    p1_name != p2_name,
    p1_begin != p2_begin.
Now, let's tell DeepDive to run what we have so far:
In [32]:
!deepdive redo spouse_candidate
In [33]:
%%bash
deepdive query '
name1, name2, doc, sentence | 20
?- spouse_candidate(p1, name1, p2, name2),
person_mention(p1, _, doc, sentence, _, _).
'
In [34]:
%%file -a app.ddlog
## Feature Extraction #########################################################
# Feature extraction (using DDLIB via a UDF) at the relation level
spouse_feature(
    p1_id   text,
    p2_id   text,
    feature text
).
The goal here is to represent each spouse candidate mention by a set of attributes or features which capture at least the key aspects of the mention, and then let a machine learning model learn how much each feature is correlated with our decision variable ('is this a spouse mention?').
For those who have worked with machine learning systems before, note that we are using a sparse storage representation: you could think of a spouse candidate (p1_id, p2_id) as being represented by a vector of length L = COUNT(DISTINCT feature), consisting of all zeros except at the indexes specified by the rows with key (p1_id, p2_id).
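A minimal sketch of the equivalence between the two representations (the feature names and mention identifiers below are made up):

# Sparse rows, as stored in spouse_feature, vs. the equivalent dense binary vector.
# Hypothetical feature vocabulary and mention identifiers, for illustration only.
all_features = ["WORD_SEQ_[wife]", "NER_SEQ_[PERSON]", "LENGTHS_[2_2]"]
sparse_rows = [("m1", "m2", "WORD_SEQ_[wife]"), ("m1", "m2", "LENGTHS_[2_2]")]

index = {f: i for i, f in enumerate(all_features)}
dense = [0] * len(all_features)         # vector of length L = len(all_features)
for p1_id, p2_id, feature in sparse_rows:
    if (p1_id, p2_id) == ("m1", "m2"):  # rows keyed by this candidate pair
        dense[index[feature]] = 1
print(dense)                            # [1, 0, 1]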
DeepDive includes an automatic feature generation library, DDlib, which we will use here.
Although many state-of-the-art applications have been built using purely DDlib-generated features, other feature sets can be used and/or added as well.
To use DDlib, we create a list of ddlib.Word
objects, two ddlib.Span
objects, and then use the function get_generic_features_relation
, as shown in the following Python code for udf/extract_spouse_features.py
:
In [35]:
%%file udf/extract_spouse_features.py
#!/usr/bin/env python
from deepdive import *
import ddlib
@tsj_extractor
@returns(lambda
        p1_id   = "text",
        p2_id   = "text",
        feature = "text",
    :[])
def extract(
        p1_id          = "text",
        p2_id          = "text",
        p1_begin_index = "int",
        p1_end_index   = "int",
        p2_begin_index = "int",
        p2_end_index   = "int",
        doc_id         = "text",
        sent_index     = "int",
        tokens         = "text[]",
        lemmas         = "text[]",
        pos_tags       = "text[]",
        ner_tags       = "text[]",
        dep_types      = "text[]",
        dep_parents    = "int[]",
    ):
    """
    Uses DDLIB to generate features for the spouse relation.
    """
    # Create a DDLIB sentence object, which is just a list of DDLIB Word objects
    sent = []
    for i, t in enumerate(tokens):
        sent.append(ddlib.Word(
            begin_char_offset=None,
            end_char_offset=None,
            word=t,
            lemma=lemmas[i],
            pos=pos_tags[i],
            ner=ner_tags[i],
            dep_par=dep_parents[i] - 1,  # Note that as stored from CoreNLP 0 is ROOT, but for DDLIB -1 is ROOT
            dep_label=dep_types[i]))
    # Create DDLIB Spans for the two person mentions
    p1_span = ddlib.Span(begin_word_id=p1_begin_index, length=(p1_end_index - p1_begin_index + 1))
    p2_span = ddlib.Span(begin_word_id=p2_begin_index, length=(p2_end_index - p2_begin_index + 1))
    # Generate the generic features using DDLIB
    for feature in ddlib.get_generic_features_relation(sent, p1_span, p2_span):
        yield [p1_id, p2_id, feature]
In [36]:
!chmod +x udf/extract_spouse_features.py
Note that getting the input for this UDF requires joining the person_mention
and sentences
tables:
In [37]:
%%file -a app.ddlog
function extract_spouse_features over (
        p1_id          text,
        p2_id          text,
        p1_begin_index int,
        p1_end_index   int,
        p2_begin_index int,
        p2_end_index   int,
        doc_id         text,
        sent_index     int,
        tokens         text[],
        lemmas         text[],
        pos_tags       text[],
        ner_tags       text[],
        dep_types      text[],
        dep_tokens     int[]
    ) returns rows like spouse_feature
    implementation "udf/extract_spouse_features.py" handles tsj lines.

spouse_feature += extract_spouse_features(
    p1_id, p2_id, p1_begin_index, p1_end_index, p2_begin_index, p2_end_index,
    doc_id, sent_index, tokens, lemmas, pos_tags, ner_tags, dep_types, dep_tokens
) :-
    person_mention(p1_id, _, doc_id, sent_index, p1_begin_index, p1_end_index),
    person_mention(p2_id, _, doc_id, sent_index, p2_begin_index, p2_end_index),
    sentences(doc_id, sent_index, tokens, lemmas, pos_tags, ner_tags, _, dep_types, dep_tokens).
Now, let's execute this UDF to get our features:
In [38]:
!deepdive redo spouse_feature
If we take a look at a sample of the extracted features, they will look roughly like the following:
In [39]:
!deepdive query '| 20 ?- spouse_feature(_, _, feature).'
Now we have generated what looks more like the standard input to a machine learning problem—a set of objects, represented by sets of features, which we want to classify (here, as true or false mentions of a spousal relation). However, we don't have any supervised labels (i.e., a set of correct answers) for a machine learning algorithm to learn from! In most real world applications, a sufficiently large set of supervised labels is not available. With DeepDive, we take the approach sometimes referred to as distant supervision or data programming, where we instead generate a noisy set of labels using a mix of mappings from secondary datasets and other heuristic rules.
In this section, we'll use distant supervision (or 'data programming') to provide a noisy set of labels for candidate relation mentions, with which we will train a machine learning model.
We'll describe two basic categories of approaches:
1. mapping from a secondary structured dataset (here, DBpedia) to our candidates, and
2. writing heuristic rules over the mention and sentence attributes.
Then, we'll describe a simple majority-vote approach to resolving multiple labels per example, which can be implemented within DDlog.
Let's declare a new table where we'll store the labels (referring to the spouse candidate mentions), with an integer value (True=1, False=-1
) and a description (rule_id
):
In [40]:
%%file -a app.ddlog
## Distant Supervision ########################################################
spouse_label(
    p1_id   text,
    p2_id   text,
    label   int,
    rule_id text
).
Let's start by adding every spouse candidate mention as an unsupervised example (label 0, with a NULL rule_id). This just simplifies some steps later:
In [41]:
%%file -a app.ddlog
# make sure all pairs in spouse_candidate are considered as unsupervised examples
spouse_label(p1, p2, 0, NULL) :-
    spouse_candidate(p1, _, p2, _).
First, we'll try using an external structured dataset of known married couples, from DBpedia, to distantly supervise our dataset. We'll download the relevant data, and then map it to our candidate spouse relations.
Our goal is to first extract a collection of known married couples from DBpedia and then load this into the spouses_dbpedia
table in our database.
To extract known married couples, we use the DBpedia dump present in Google's BigQuery platform.
First we extract the URI, name and spouse information from the DBpedia person
table records in BigQuery for which the field name
is not NULL.
We use the following query:
SELECT URI, name, spouse
FROM [fh-bigquery:dbpedia.person]
WHERE name <> "NULL"
We store the result of the above query in a local project table dbpedia.validnames
and perform a self-join to obtain the pairs of married couples.
SELECT t1.name, t2.name
FROM [dbpedia.validnames] AS t1
JOIN EACH [dbpedia.validnames] AS t2
ON t1.spouse = t2.URI
The output of the above query is stored in a new table named dbpedia.spouseraw
.
Finally, we use the following query to remove symmetric duplicates.
SELECT p1, p2
FROM (SELECT t1_name as p1, t2_name as p2 FROM [dbpedia.spouseraw]),
(SELECT t2_name as p1, t1_name as p2 FROM [dbpedia.spouseraw])
WHERE p1 < p2
The output of this query is stored in a local file.
The file contains duplicate rows (BigQuery does not support distinct).
It also contains noisy rows where the name field is a string in which the given name, family name, and multiple aliases were concatenated, including the characters { and }.
Using the Unix commands sed, sort, and uniq, we first remove the lines containing the characters { and }, and then remove duplicate entries.
This results in an input file spouses_dbpedia.csv containing 6,126 entries of married couples.
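The cleanup described above used sed, sort, and uniq; a minimal Python sketch of the same step (the intermediate file name is hypothetical) would be:

# Drop noisy rows containing '{' or '}', then drop duplicate lines,
# mirroring the sed | sort | uniq pipeline described above.
# "spouseraw.csv" is a hypothetical name for the BigQuery output file.
seen = set()
with open("spouseraw.csv") as src, open("spouses_dbpedia.csv", "w") as dst:
    for line in src:
        if "{" in line or "}" in line or line in seen:
            continue
        seen.add(line)
        dst.write(line)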
Note that we made this spouses_dbpedia.csv
available for download from GitHub, so you don't have to repeat the above process.
In [42]:
%%file -a app.ddlog
# distant supervision using data from DBpedia
spouses_dbpedia(
    person1_name text,
    person2_name text
).
Notice that we can easily load the data in spouses_dbpedia.csv into the table we just declared if we follow DeepDive's convention of organizing input data under the input/ directory.
The input file name simply needs to start with the name of the target database table.
Let's download the file from GitHub to input/spouses_dbpedia.csv.bz2
under our application:
In [43]:
!cd input && curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/input/spouses_dbpedia.csv.bz2"
Then execute this command to load it into the database:
In [44]:
!deepdive redo spouses_dbpedia
Now the database should include tuples that look like the following:
In [45]:
!deepdive query '| 20 ?- spouses_dbpedia(name1, name2).'
In [46]:
%%file -a app.ddlog
spouse_label(p1, p2, 1, "from_dbpedia") :-
    spouse_candidate(p1, p1_name, p2, p2_name),
    spouses_dbpedia(n1, n2),
    [ lower(n1) = lower(p1_name), lower(n2) = lower(p2_name) ;
      lower(n2) = lower(p1_name), lower(n1) = lower(p2_name) ].
It should be noted that there are many clear ways in which this rule could be improved (fuzzy matching, more restrictive conditions, etc.), but this serves as an example of one major type of distant supervision rule.
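For example, one could swap the exact lower(...) comparison for an approximate string match. A minimal sketch of such a fuzzy matcher using Python's difflib (not part of this tutorial's pipeline; the threshold is an arbitrary choice):

from difflib import SequenceMatcher

def names_match(a, b, threshold=0.8):
    # Case-insensitive approximate comparison of two full names.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

print(names_match("Barack Obama", "Barack H. Obama"))  # True at this threshold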
We can also create a supervision rule which does not rely on any secondary structured dataset like DBpedia, but instead just uses some heuristic.
We set up a DDlog function, supervise
, which uses a UDF containing several heuristic rules over the mention and sentence attributes:
In [47]:
%%file -a app.ddlog
# supervision by heuristic rules in a UDF
function supervise over (
        p1_id text, p1_begin int, p1_end int,
        p2_id text, p2_begin int, p2_end int,
        doc_id         text,
        sentence_index int,
        tokens     text[],
        lemmas     text[],
        pos_tags   text[],
        ner_tags   text[],
        dep_types  text[],
        dep_tokens int[]
    ) returns (
        p1_id text, p2_id text, label int, rule_id text
    )
    implementation "udf/supervise_spouse.py" handles tsj lines.
spouse_label += supervise(
    p1_id, p1_begin, p1_end,
    p2_id, p2_begin, p2_end,
    doc_id, sentence_index,
    tokens, lemmas, pos_tags, ner_tags, dep_types, dep_token_indexes
) :-
    spouse_candidate(p1_id, _, p2_id, _),
    person_mention(p1_id, p1_text, doc_id, sentence_index, p1_begin, p1_end),
    person_mention(p2_id, p2_text, _, _, p2_begin, p2_end),
    sentences(
        doc_id, sentence_index,
        tokens, lemmas, pos_tags, ner_tags, _, dep_types, dep_token_indexes
    ).
The Python UDF named udf/supervise_spouse.py
contains several heuristic rules:
In [48]:
%%file udf/supervise_spouse.py
#!/usr/bin/env python
from deepdive import *
import random
from collections import namedtuple
SpouseLabel = namedtuple('SpouseLabel', 'p1_id, p2_id, label, type')
@tsj_extractor
@returns(lambda
        p1_id   = "text",
        p2_id   = "text",
        label   = "int",
        rule_id = "text",
    :[])
# heuristic rules for finding positive/negative examples of spouse relationship mentions
def supervise(
        p1_id="text", p1_begin="int", p1_end="int",
        p2_id="text", p2_begin="int", p2_end="int",
        doc_id="text", sentence_index="int",
        tokens="text[]", lemmas="text[]", pos_tags="text[]", ner_tags="text[]",
        dep_types="text[]", dep_token_indexes="int[]",
    ):
    # Constants
    MARRIED = frozenset(["wife", "husband"])
    FAMILY = frozenset(["mother", "father", "sister", "brother", "brother-in-law"])
    MAX_DIST = 10
    # Common data objects
    p1_end_idx = min(p1_end, p2_end)
    p2_start_idx = max(p1_begin, p2_begin)
    p2_end_idx = max(p1_end, p2_end)
    intermediate_lemmas = lemmas[p1_end_idx+1:p2_start_idx]
    intermediate_ner_tags = ner_tags[p1_end_idx+1:p2_start_idx]
    tail_lemmas = lemmas[p2_end_idx+1:]
    spouse = SpouseLabel(p1_id=p1_id, p2_id=p2_id, label=None, type=None)
    # Rule: Candidates that are too far apart
    if len(intermediate_lemmas) > MAX_DIST:
        yield spouse._replace(label=-1, type='neg:far_apart')
    # Rule: Candidates that have a third person in between
    if 'PERSON' in intermediate_ner_tags:
        yield spouse._replace(label=-1, type='neg:third_person_between')
    # Rule: Sentences that contain wife/husband in between
    #       (<P1>)([ A-Za-z]+)(wife|husband)([ A-Za-z]+)(<P2>)
    if len(MARRIED.intersection(intermediate_lemmas)) > 0:
        yield spouse._replace(label=1, type='pos:wife_husband_between')
    # Rule: Sentences that contain and ... married
    #       (<P1>)(and)?(<P2>)([ A-Za-z]+)(married)
    if ("and" in intermediate_lemmas) and ("married" in tail_lemmas):
        yield spouse._replace(label=1, type='pos:married_after')
    # Rule: Sentences that contain familial relations:
    #       (<P1>)([ A-Za-z]+)(brother|sister|father|mother)([ A-Za-z]+)(<P2>)
    if len(FAMILY.intersection(intermediate_lemmas)) > 0:
        yield spouse._replace(label=-1, type='neg:familial_between')
In [49]:
!chmod +x udf/supervise_spouse.py
Note that the rough theory behind this approach is that we don't need high-quality (e.g., hand-labeled) supervision to learn a high-quality model. Instead, using statistical learning, we can in fact recover high-quality models from a large set of low-quality or noisy labels.
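To see the intuition behind this claim, consider a toy simulation (purely illustrative, not part of the application): three labeling rules that are each only 70% accurate, combined by a majority vote like the DDlog SUM rule in the next cell, yield a noticeably more accurate label.

import random
random.seed(0)

# Toy simulation: three independent rules, each right with probability p,
# combined by summing votes in {-1, +1} and taking the sign.
N, p = 10000, 0.7
true_labels = [random.choice([1, -1]) for _ in range(N)]

def noisy(y):
    # a single rule that agrees with the truth with probability p
    return y if random.random() < p else -y

votes = [noisy(y) + noisy(y) + noisy(y) for y in true_labels]
acc = sum(1 for y, v in zip(true_labels, votes) if (v > 0) == (y > 0)) / float(N)
print(acc)  # ~0.78, better than any single 0.7-accurate rule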
In [50]:
%%file -a app.ddlog
# resolve multiple labels by majority vote (summing the labels in {-1,0,1})
spouse_label_resolved(p1_id, p2_id, SUM(vote)) :-
    spouse_label(p1_id, p2_id, vote, rule_id).
Then, we simply threshold and add these labels to our decision variable table has_spouse
(see next section for details here):
In [51]:
%%file -a app.ddlog
# assign the resolved labels for the spouse relation
has_spouse(p1_id, p2_id) = if l > 0 then TRUE
                      else if l < 0 then FALSE
                      else NULL end :- spouse_label_resolved(p1_id, p2_id, l).
Once again, to execute all of the above, just run the following command:
In [52]:
!deepdive redo has_spouse
Recall that deepdive do (and likewise deepdive redo) will execute all upstream tasks as well, so this will execute all of the previous steps!
Now, we can take a brief look at how many candidates are supervised by different rules, which will look something like the table below. Obviously, the counts will vary depending on your input corpus.
In [53]:
!deepdive query 'rule, @order_by COUNT(1) ?- spouse_label(p1,p2, label, rule).'
Now, we need to specify the actual model that DeepDive will perform learning and inference over. At a high level, this boils down to specifying three things:
What are the variables of interest that we want DeepDive to predict for us?
What are the features for each of these variables?
What are the connections between the variables?
Once we have specified the model in this way, DeepDive will learn the parameters of the model (the weights of the features and potentially of the connections between variables), and then perform statistical inference over the learned model to determine the probability that each variable of interest is true.
For more advanced users: we are specifying a factor graph where the features are unary factors, and then using SGD and Gibbs sampling for learning and inference. Further technical detail is available here.
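To make the unary-factor view concrete: if we ignore the two correlation rules added below, the marginal probability of each candidate reduces to a logistic function of the sum of its active feature weights. A minimal sketch (the feature names and weights are made up):

import math

# With only unary factors, P(has_spouse = True) is sigmoid(sum of feature weights).
# Hypothetical learned weights, for illustration only.
weights = {"WORD_SEQ_[wife]": 1.7, "NER_SEQ_[PERSON]": -0.2}
active_features = ["WORD_SEQ_[wife]", "NER_SEQ_[PERSON]"]

z = sum(weights[f] for f in active_features)
probability = 1.0 / (1.0 + math.exp(-z))
print(round(probability, 3))  # 0.818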
In our case, we have one variable to predict per spouse candidate mention, namely, is this mention actually indicating a spousal relation or not?
In other words, we want DeepDive to predict the value of a Boolean variable for each spouse candidate mention, indicating whether it is true or not.
Recall that we started this tutorial with specifying this at the beginning of app.ddlog
as follows:
has_spouse?(
    p1_id text,
    p2_id text
).
DeepDive will predict not only the value of these variables, but also the marginal probabilities, i.e., the confidence level that DeepDive has for each individual prediction.
Next, we indicate (i) that each has_spouse
variable will be connected to the features of the corresponding spouse_candidate
row, (ii) that we wish DeepDive to learn the weights of these features from our distantly supervised data, and (iii) that the weight of a specific feature across all instances should be the same, as follows:
In [54]:
%%file -a app.ddlog
## Inference Rules ############################################################
# Features
@weight(f)
has_spouse(p1_id, p2_id) :-
    spouse_feature(p1_id, p2_id, f).
Finally, we can specify dependencies between the prediction variables, with either learned or given weights.
Here, we'll specify two such rules, with fixed (given) weights that we specify.
First, we define a symmetry connection, namely specifying that if the model thinks a person mention p1
and a person mention p2
indicate a spousal relationship in a sentence, then it should also think that the reverse is true, i.e., that p2
and p1
indicate one too:
In [55]:
%%file -a app.ddlog
# Inference rule: Symmetry
@weight(3.0)
has_spouse(p1_id, p2_id) => has_spouse(p2_id, p1_id) :-
    TRUE.
Next, we specify a rule that the model should be strongly biased towards finding one marriage indication per person mention. We do this inversely, using a negative weight, as follows:
In [56]:
%%file -a app.ddlog
# Inference rule: Only one marriage
@weight(-1.0)
has_spouse(p1_id, p2_id) => has_spouse(p1_id, p3_id) :-
    TRUE.
In [57]:
!deepdive redo probabilities
This will ground the model based on the data in the database, learn the weights, infer the expectations or marginal probabilities of the variables in the model, and then load them back to the database.
Let's take a look at the probabilities inferred by DeepDive for the has_spouse
variables.
In [58]:
!deepdive sql 'SELECT p1_id, p2_id, expectation FROM has_spouse_inference ORDER BY random() LIMIT 20'
After finishing a pass of writing and running the DeepDive application, the first thing we want to see is how good the results are. In this section, we describe how DeepDive's interactive tools can be used for viewing the results as well as error analysis and debugging.
In [ ]:
!deepdive do calibration-plots
It will produce a file run/model/calibration-plots/has_spouse.png
that holds three plots as shown below:
Refer to the full documentation on calibration data for more detail on how to interpret the plots and take actions.
We need to give DeepDive hints about which parts of the data we want to browse, using DDlog annotations.
For example, on the articles relation we declared earlier in app.ddlog, we can sprinkle annotations such as @source, @key, and @searchable, as follows.
@source
articles(
    @key
    id text,
    @searchable
    content text
).
The fully annotated DDlog code is available at GitHub and can be downloaded to replace your app.ddlog
by running the following command:
In [ ]:
!curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/app.ddlog"
Next, if we run the following command, DeepDive will create and populate a search index according to these hints.
In [ ]:
!mindbender search drop; mindbender search update
To access the populated search index through a web browser, run:
In [ ]:
!mindbender search gui
Then, point your browser to the URL that appears after the command (typically http://localhost:8000) to see a view that looks like the following:
To browse the results, we can add annotations describing the inferred relations and how they relate to their source relations.
For example, the @extraction
and @references
annotations in the following DDlog declaration tell DeepDive that the variable relation has_spouse
is inferred from pairs of person_mention
.
@extraction
has_spouse?(
    @key
    @references(relation="person_mention", column="mention_id", alias="p1")
    p1_id text,
    @key
    @references(relation="person_mention", column="mention_id", alias="p2")
    p2_id text
).
The relation person_mention
as well as the relations it references should have similar annotations (see the complete app.ddlog
code for full detail).
Then, repeating the commands to update the search index and load the user interface will allow us to browse the expected marginal probabilities of has_spouse
as well.
In fact, the screenshots above show the data presented using a carefully prepared set of templates under mindbender/search-templates/.
In these AngularJS templates, virtually anything you can program in HTML/CSS/JavaScript/CoffeeScript can be added to present the data in a form ideal for human consumption (e.g., highlighted text spans rather than token indexes).
Please see the documentation about customizing the presentation for further detail.
Mindtagger, which is part of the Mindbender tool suite, assists with data labeling tasks to quickly assess the precision and/or recall of the extraction.
Here we show how Mindtagger helps us perform a labeling task to estimate the precision of the extraction.
The necessary set of files shown below already exists in the example under labeling/has_spouse-precision/.
First, we take a random sample of 100 examples from the has_spouse relation whose expectation is at least 0.9, using the following SQL query, and store them in a file called has_spouse.csv.
In [ ]:
!mkdir -p labeling/has_spouse-precision/
In [ ]:
%%bash
deepdive sql eval "
SELECT hsi.p1_id
, hsi.p2_id
, s.doc_id
, s.sentence_index
, hsi.dd_label
, hsi.expectation
, s.tokens
, pm1.mention_text AS p1_text
, pm1.begin_index AS p1_start
, pm1.end_index AS p1_end
, pm2.mention_text AS p2_text
, pm2.begin_index AS p2_start
, pm2.end_index AS p2_end
FROM has_spouse_inference hsi
, person_mention pm1
, person_mention pm2
, sentences s
WHERE hsi.p1_id = pm1.mention_id
AND pm1.doc_id = s.doc_id
AND pm1.sentence_index = s.sentence_index
AND hsi.p2_id = pm2.mention_id
AND pm2.doc_id = s.doc_id
AND pm2.sentence_index = s.sentence_index
AND expectation >= 0.9
ORDER BY random()
LIMIT 100
" format=csv header=1 >labeling/has_spouse-precision/has_spouse.csv
We also prepare the mindtagger.conf
and template.html
files under labeling/has_spouse-precision/
that look like the following:
In [ ]:
%%file labeling/has_spouse-precision/mindtagger.conf
title: Labeling task for estimating has_spouse precision
items: {
file: has_spouse.csv
key_columns: [p1_id, p2_id]
}
template: template.html
In [ ]:
%%file labeling/has_spouse-precision/template.html
<mindtagger mode="precision">
<template for="each-item">
<strong title="item_id: {{item.id}}">{{item.p1_text}} -- {{item.p2_text}}</strong>
with expectation <strong>{{item.expectation | number:3}}</strong> appeared in:
<blockquote>
<big mindtagger-word-array="item.tokens" array-format="json">
<mindtagger-highlight-words from="item.p1_start" to="item.p1_end" with-style="background-color: yellow;"/>
<mindtagger-highlight-words from="item.p2_start" to="item.p2_end" with-style="background-color: cyan;"/>
</big>
</blockquote>
<div>
<div mindtagger-item-details></div>
</div>
</template>
<template for="tags">
<span mindtagger-adhoc-tags></span>
<span mindtagger-note-tags></span>
</template>
</mindtagger>
In [ ]:
!mindbender tagger labeling/has_spouse-precision/mindtagger.conf
Then, point your browser to the URL that appears after the command (typically http://localhost:8000) to see a dedicated user interface for labeling data that looks like the following:
We can quickly label the sampled 100 examples using the intuitive user interface with buttons for correct/incorrect tags. It also supports keyboard shortcuts for entering labels and moving between items. (Press the ? key to view all supported keys.) The number of examples labeled correct, as well as any other tags, appears in the "Tags" dropdown at the top right corner, as shown below.
The collected tags can also be exported in various formats for post-processing.
For further detail, see the documentation about labeling data.
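For instance, once the tags are exported, estimating precision over the labeled sample is a short script. In the sketch below, the exported file name and its is_correct column are hypothetical; adjust them to the actual export format:

import csv

# Estimate precision as (#labeled correct) / (#labeled) over the sample.
# "has_spouse-precision-tags.csv" and the "is_correct" column are hypothetical.
with open("has_spouse-precision-tags.csv") as f:
    labels = [row["is_correct"] == "true"
              for row in csv.DictReader(f) if row.get("is_correct")]
precision = sum(labels) / float(len(labels))
print("precision ~= %.2f over %d labeled examples" % (precision, len(labels)))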
Dashboard provides a way to monitor various descriptive statistics of the data products after each pass of DeepDive improvements. We can use a combination of SQL, any Bash script, and Markdown in each report template that produces a report, and we can produce a collection of them as a snapshot against the data extracted by DeepDive. Dashboard provides a structure to manage those templates and instantiate them in a sophisticated way using parameters. It provides a graphical interface for visualizing the collected statistics and trends as shown below. Refer to the full documentation on Dashboard to set up your own set of reports.